14 Series Methods
These refinements of Euler’s method use higher degree approximations.
14.1 Taylor
We focus on the first-order one-dimensional case, a differential equation of the form \[x'(t) = F(x,t),\] with some initial condition \(x(t_0) = x_0\) (this method works just as well for higher order differential equations, though).
Many nice functions have power series expansions. If this is the case for our initial value problem, then we may write \[x(t) = x(t_0) + x'(t_0)(t-t_0) + \frac{x''(t_0)}{2}(t-t_0)^2 + \frac{x'''(t_0)}{3!}(t-t_0)^3 + ...\]
Our IVP tells us \(x(t_0)\), and so we quickly determine \[x'(t_0) = F(x_0,t_0).\]
Higher derivatives are no obstacle, because we can use the multivariable chain rule: \[x''(t) = F'(x,t) (x',1)^{\mathsf T} = \frac{\partial F}{\partial x} x'(t) + \frac{\partial F}{\partial t}.\]
This expression can be evaluated at \(t=t_0\) using the initial value. It is straightforward to repeat this to obtain higher derivatives.
Example 14.1 Suppose we want to find part of the Taylor series for a solution to \[x'(t) = tx - 5t^2 + \arctan(x)\] starting from \(x(0) = 1\).
Here \(x'(0) = 0\cdot 1 - 5\cdot 0^2 + \arctan(1) = \pi/4 \approx 0.785\), and differentiating as above gives \(x''(t) = x + t x' - 10t + \frac{x'}{1+x^2}\), so \(x''(0) = 1 + \pi/8 \approx 1.393\). Continuing in this way, you can quickly work out the start of a series \[x(t) = 1 + 0.785t + 0.696t^2 - 1.340 t^3 + ... \]
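If you would rather let a computer do the differentiating, here is a minimal sketch using SymPy (the setup and names are ours, not the text's) that reproduces these coefficients:

```python
# Sketch (not from the text): Taylor coefficients for Example 14.1 via SymPy.
import sympy as sp

x, t = sp.symbols("x t")
F = t*x - 5*t**2 + sp.atan(x)          # right-hand side of x'(t) = F(x, t)

# derivs[k] is a formula for x^(k+1)(t), built by repeatedly applying
#   d/dt g(x, t) = dg/dx * x' + dg/dt   with   x' = F.
derivs = [F]
for _ in range(2):
    g = derivs[-1]
    derivs.append(sp.diff(g, x) * F + sp.diff(g, t))

# Evaluate at the initial condition x(0) = 1 and divide by k! to get coefficients.
coeffs = [1] + [sp.N(d.subs({x: 1, t: 0}) / sp.factorial(k + 1), 4)
                for k, d in enumerate(derivs)]
print(coeffs)   # [1, 0.7854, 0.6963, -1.340]
```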
After carrying out such a calculation to degree \(n\), we can estimate \[x(t_1) = x(t_0+\Delta t) \approx \sum_{k=0}^n \frac{x^{(k)}(t_0)}{k!} \Delta t^k.\]
From that new value, we can recalculate a series to estimate \(x(t_2)\), then \(x(t_3)\) and so on. The case \(n=1\) is precisely Euler’s method.
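A hedged sketch of the whole procedure, again using SymPy and with names of our own choosing, might look like the following; setting `n=1` recovers Euler's method:

```python
# Sketch (not from the text): a degree-n Taylor-series stepper for x' = F(x, t).
import math
import sympy as sp

x, t = sp.symbols("x t")

def taylor_solve(F, x0, t0, t_end, n=3, num_steps=10):
    """Advance x' = F(x, t), x(t0) = x0 to t_end with degree-n Taylor steps."""
    # Symbolic formulas for x', x'', ..., x^(n), via d/dt g = dg/dx * F + dg/dt.
    derivs = [F]
    for _ in range(n - 1):
        g = derivs[-1]
        derivs.append(sp.diff(g, x) * F + sp.diff(g, t))

    dt = (t_end - t0) / num_steps
    x_val, t_val = float(x0), float(t0)
    for _ in range(num_steps):
        # x(t + dt) ≈ x(t) + sum_{k=1}^{n} x^(k)(t) dt^k / k!
        x_val += sum(float(d.subs({x: x_val, t: t_val})) * dt**k / math.factorial(k)
                     for k, d in enumerate(derivs, start=1))
        t_val += dt
    return x_val

# The IVP of Example 14.1, advanced to t = 0.5 in ten degree-3 steps.
print(taylor_solve(t*x - 5*t**2 + sp.atan(x), x0=1, t0=0, t_end=0.5))
```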
The error bound for this method at each step is (asymptotically) given by the Taylor remainder: \[\textrm{error from step $m-1$ to $m$} \sim C \Delta t_m^{n+1},\] where \(C\) is, unfortunately, more or less unknown. These errors accumulate across the interval from \(t_0\) to our target value; if we take \(N\) steps of size \(\Delta t = (t-t_0)/N\) to reach \(t\), then the accumulated error is roughly \[\approx N (C \Delta t^{n+1}) = C (t-t_0) \Delta t^n,\] which is of order \(\Delta t^n\) once the fixed factor \(t-t_0\) is absorbed into the constant.
For example, Euler's method is the case \(n=1\), so the expected error after \(N\) steps is on the order of \(C/N\), which does go to zero as \(N\) goes to \(\infty\), but slowly.
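As a quick numerical illustration of that rate (our own experiment, not from the text), apply Euler's method to \(x' = x\), \(x(0) = 1\) on \([0,1]\) and watch the error at \(t = 1\) shrink like \(1/N\):

```python
# Sketch (not from the text): Euler's method error for x' = x scales like 1/N.
import math

def euler(N):
    """Euler's method for x' = x, x(0) = 1, integrated to t = 1 in N steps."""
    x, dt = 1.0, 1.0 / N
    for _ in range(N):
        x += dt * x              # x_{m+1} = x_m + dt * F(x_m, t_m) with F = x
    return x

for N in (10, 100, 1000, 10000):
    err = abs(euler(N) - math.e)
    print(f"N = {N:6d}   error = {err:.2e}   N * error = {N * err:.3f}")
```

The product \(N \cdot \textrm{error}\) settles near \(e/2 \approx 1.36\): the error really does behave like \(C/N\), and the experiment even exposes the constant that the bound itself leaves unknown.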
As this small example suggests, the method is conceptually simple, a natural improvement to Euler's method, and easy to carry out by hand. However, it has a couple of downsides:
- Besides the simple error accumulation from step to step, there is a compounding effect from using the estimated \(x(t_1)\) to estimate \(x(t_2)\), then \(x(t_3)\), and so on, as in Euler's method. The actual errors can be much larger than you'd like.
- The derivatives grow increasingly complicated. A computer can make it easier, but symbolic differentiation can be costly.
- Higher derivatives are extraordinarily sensitive to small changes in the input. If experimental data is an input, results may vary greatly between runs.
14.2 Recurrences
This is an alternative way to get at the Taylor series which is sometimes easier to carry out. As before, we start by assuming that there is a solution and that it has some power series expansion. For simplicity, we suppress the connection between the coefficients and the derivatives, writing
\[x(t) = x_0 + x_1t + \frac{x_2}{2} t^2 + \frac{x_3}{3!} t^3 + ...\]
An expression like this is easy to differentiate:
\[\begin{align*} x(t) &= x_0 + x_1t + \frac{x_2}{2!} t^2 + \frac{x_3}{3!} t^3 + ...\\ x'(t) &= x_1 + x_2 t +\frac{x_3}{2!} t^2 + \frac{x_4}{3!} t^3 + ...\\ x''(t) &= x_2 + x_3 t +\frac{x_4}{2!} t^2 + \frac{x_5}{3!} t^3 + ...\\ x'''(t) &= x_3 + x_4 t +\frac{x_5}{2!} t^2 + \frac{x_6}{3!} t^3 + ...\\ \end{align*}\]
Retaining the factorials means that differentiation is nothing more than shifting indices. Deferring the division can, in some situations, increase stability.
Then, given a differential equation, one can expand all its terms as series and substitute these series for each derivative of \(x\). Collecting terms by degree results in an infinite system of equations describing the \(x_n\). Each of these equations involves only finitely many coefficients and can be solved for the one of highest index in terms of the lower ones, so after including the initial conditions one can solve the system from low degree to high degree.
In nice situations, this infinite system can be solved by a simple recurrence. This is easiest to see in an example.
Example 14.2 Consider the differential equation
\[x'' + 2x' - 3x = 0\]
Substitute the series expressions for the derivatives into this, then gather terms:
\[\begin{align*} 0 &= \sum x_{k+2} \frac{t^k}{k!} + 2\sum x_{k+1}\frac{t^k}{k!} - 3\sum x_k\frac{t^k}{k!}\\ &= \sum (x_{k+2} + 2x_{k+1} - 3x_k) \frac{t^k}{k!} \end{align*}\]
This tells us that \[x_{k+2} = 3x_k - 2x_{k+1}\]
Starting from initial conditions \(x_0 = x(t_0)\) and \(x_1 = x'(t_0)\), we can quickly list off many coefficients of the series. For example, from \(x_0 = x_1 = 1\) we can calculate \(x_2 = 3(1) - 2(1) = 1\), then \(x_3 = 3(1) - 2(1) = 1\), and so on: every coefficient is \(1\), and the series is exactly that of \(e^t\), which indeed solves the equation.
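As a small hedged sketch (function names ours), running the recurrence on a computer and checking the partial sums against the exponential it reproduces:

```python
# Sketch (not from the text): run the recurrence x_{k+2} = 3 x_k - 2 x_{k+1}.
import math

def coefficients(x0, x1, n):
    """Coefficients x_k for k = 0..n from the recurrence x_{k+2} = 3 x_k - 2 x_{k+1}."""
    xs = [x0, x1]
    while len(xs) <= n:
        xs.append(3 * xs[-2] - 2 * xs[-1])
    return xs

def partial_sum(xs, t):
    """Evaluate sum_k x_k t^k / k! from the computed coefficients."""
    return sum(xk * t**k / math.factorial(k) for k, xk in enumerate(xs))

xs = coefficients(1, 1, 10)
print(xs)                                   # every coefficient is 1: the series of e^t
print(partial_sum(xs, 0.5), math.exp(0.5))  # the truncated series already matches well
```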
This is just another way to find the derivatives for writing the Taylor series, so the error bound is the same. Anything obtained from this approach could be found by inducting on the derivative calculations in the Taylor series method, but it’s often easier to spot what’s happening when written in this form.
Exercises
Exercise 14.1 Consider the differential equation \[x' = x\] Use this to determine a recurrence for the coefficients of the series expansion for \(x\). From the initial condition \(x(0) = 1\), write down the first \(8\) terms.